
    Quantum Computing for Machine Learning and Physics Simulation

    Quantum computing is widely thought to provide exponential speedups over classical algorithms for a variety of computational tasks. In classical computing, methods in artificial intelligence such as neural networks and adversarial learning have enabled drastic improvements in state-of-the-art performance for a variety of tasks. We consider the intersection of quantum computing with machine learning, including quantum algorithms for deep learning on classical datasets, quantum adversarial learning for quantum states, and variational quantum machine learning for improved physics simulation. We consider a standard deep neural network architecture and show that the conditions amenable to trainability by gradient descent coincide with those necessary for an efficient quantum algorithm. Considering the neural network in the infinite-width limit via the neural tangent kernel formalism, we propose a quantum algorithm to train the neural network with vanishing error as the training dataset size increases. Under a sparse approximation of the neural tangent kernel, the training time scales logarithmically with the number of training examples, providing the first known exponential quantum speedup for feedforward neural networks. Related approximations to the neural tangent kernel are discussed, with numerical studies showing successful convergence beyond the proven regime. Our work suggests the applicability of quantum computing to additional neural network architectures and common datasets such as MNIST, as well as kernel methods beyond the neural tangent kernel. Generative adversarial networks (GANs) are one of the most widely adopted machine learning methods for data generation. We propose an entangling quantum GAN (EQ-GAN) that overcomes some limitations of previously proposed quantum GANs. 
EQ-GAN guarantees convergence to a Nash equilibrium under minimax optimization of the discriminator and generator circuits by performing entangling operations between both the generator output and the true quantum data. We show that EQ-GAN has additional robustness against coherent errors and demonstrate the effectiveness of EQ-GAN experimentally on a Google Sycamore superconducting quantum processor. By adversarially learning efficient representations of quantum states, we prepare an approximate quantum random access memory and demonstrate its use in applications including the training of near-term quantum neural networks. With quantum computers providing a natural platform for physics simulation, we investigate the use of variational quantum circuits to simulate many-body systems with high fidelity in the near future. In particular, recent work shows that teleportation caused by introducing a weak coupling between two entangled SYK models is dual to a particle traversing an AdS-Schwarzschild wormhole, providing a mechanism to probe quantum gravity theories in the lab. To simulate such a system, we propose the process of compressed Trotterization to improve the fidelity of time evolution on noisy devices. The task of learning approximate time evolution circuits is shown to have a favorable training landscape, and numerical experiments demonstrate its relevance to simulating other many-body systems such as a Fermi-Hubbard model. For the SYK model in particular, we demonstrate the construction of a low-rank approximation that favors a shallower Trotterization. Finally, classical simulations of finite-N SYK models suggest that teleportation via a traversable wormhole instead of random unitary scrambling is achievable with O(20) qubits, providing further indication that such quantum gravity experiments may be realizable with near-term quantum hardware.
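The infinite-width training described above has a simple classical counterpart: kernel regression with the neural tangent kernel. A minimal sketch, assuming a one-hidden-layer infinite-width ReLU network with unit-norm inputs and one common NTK parameterization (the quantum algorithm, sparse approximation, and actual datasets are not reproduced here):

```python
import numpy as np

def ntk_relu(x1, x2):
    # One common analytic NTK of an infinite-width one-hidden-layer ReLU
    # network, valid for unit-norm inputs (an assumption of this sketch).
    u = np.clip(x1 @ x2.T, -1.0, 1.0)
    theta = np.arccos(u)
    k0 = (np.pi - theta) / np.pi                     # arc-cosine kernel, deg 0
    k1 = (u * (np.pi - theta) + np.sin(theta)) / np.pi  # deg 1
    return u * k0 + k1

rng = np.random.default_rng(0)
# Toy dataset on the unit circle: learn y = sin(angle).
angles = rng.uniform(0, 2 * np.pi, size=40)
X = np.stack([np.cos(angles), np.sin(angles)], axis=1)
y = np.sin(angles)

K = ntk_relu(X, X)                                   # train-train kernel
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(y)), y)  # ridge-regularized fit

# Gradient-descent-trained infinite-width prediction: f(x*) = K(x*, X) @ alpha
test_angles = rng.uniform(0, 2 * np.pi, size=10)
Xt = np.stack([np.cos(test_angles), np.sin(test_angles)], axis=1)
pred = ntk_relu(Xt, X) @ alpha
print(pred.shape)
```

The claimed quantum speedup concerns computing this kernel solve in time logarithmic in the number of training examples under a sparse approximation of `K`; the dense classical solve above scales cubically.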
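The Trotterization being compressed can be illustrated classically. A minimal sketch, assuming a toy two-qubit Hamiltonian H = X⊗X + Z⊗I rather than the SYK model, compares a first-order Trotter product to exact time evolution:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

A = np.kron(X, X)          # two non-commuting terms of H = A + B
B = np.kron(Z, I2)
H = A + B

def evolve(M, t):
    """Exact e^{-iMt} for a Hermitian M via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

t, n_steps = 1.0, 50
dt = t / n_steps
U_exact = evolve(H, t)
# First-order Trotter product (e^{-iA dt} e^{-iB dt})^n, error O(t^2 / n).
U_trot = np.linalg.matrix_power(evolve(A, dt) @ evolve(B, dt), n_steps)

# Process fidelity |Tr(U_exact^dag U_trot)|^2 / d^2 approaches 1 as n grows.
fid = abs(np.trace(U_exact.conj().T @ U_trot)) ** 2 / 16
print(fid)
```

On noisy hardware the trade-off inverts: more Trotter steps reduce algorithmic error but add gate error, which is why learning a shorter circuit with comparable fidelity (compressed Trotterization) helps.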

    Boundaries of quantum supremacy via random circuit sampling

    Google's recent quantum supremacy experiment heralded a transition point where quantum computing performed a computational task, random circuit sampling, that is beyond the practical reach of modern supercomputers. We examine the constraints of the observed quantum runtime advantage in an analytical extrapolation to circuits with a larger number of qubits and gates. Due to the exponential decrease of the experimental fidelity with the number of qubits and gates, we demonstrate for current fidelities a theoretical classical runtime advantage for circuits beyond a depth of 100, while quantum runtimes for cross-entropy benchmarking limit the region of a quantum advantage to around 300 qubits. However, the quantum runtime advantage boundary grows exponentially with reduced error rates, and our work highlights the importance of continued progress along this line. Extrapolations of measured error rates suggest that the limiting circuit size for which a computationally feasible quantum runtime advantage in cross-entropy benchmarking can be achieved approximately coincides with expectations for early implementations of the surface code and other quantum error correction methods. Thus the boundaries of quantum supremacy via random circuit sampling may fortuitously coincide with the advent of scalable, error corrected quantum computing in the near term.
    Comment: 8 pages, 3 figures
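The cross-entropy benchmarking (XEB) fidelity referenced above can be sketched classically. A toy example, assuming a Porter-Thomas-like (exponential) output distribution in place of real circuit amplitudes and experimental bitstrings:

```python
import numpy as np

def linear_xeb(probs_ideal, samples, n_qubits):
    """Linear XEB fidelity estimate: F = 2^n * <p(x_i)>_samples - 1."""
    return 2 ** n_qubits * np.mean(probs_ideal[samples]) - 1.0

rng = np.random.default_rng(1)
n = 10
d = 2 ** n
# Porter-Thomas-like output distribution of a deep random circuit.
p = rng.exponential(scale=1.0 / d, size=d)
p /= p.sum()

# Sampling from p itself mimics a perfect device (F near 1);
# uniform sampling mimics a fully depolarized device (F near 0).
f_good = linear_xeb(p, rng.choice(d, size=20000, p=p), n)
f_bad = linear_xeb(p, rng.choice(d, size=20000), n)
print(f_good, f_bad)
```

The runtime boundary in the abstract arises because estimating F to fixed precision requires a number of samples growing as the fidelity decays exponentially with qubits and gates.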

    Quantum adiabatic machine learning by zooming into a region of the energy surface

    Recent work has shown that quantum annealing for machine learning, referred to as QAML, can perform comparably to state-of-the-art machine learning methods with a specific application to Higgs boson classification. We propose QAML-Z, an algorithm that iteratively zooms in on a region of the energy surface by mapping the problem to a continuous space and sequentially applying quantum annealing to an augmented set of weak classifiers. Results on a programmable quantum annealer show that QAML-Z matches classical deep neural network performance at small training set sizes and reduces the performance margin between QAML and classical deep neural networks by almost 50% at large training set sizes, as measured by area under the receiver operating characteristic curve. The significant improvement of quantum annealing algorithms for machine learning and the use of a discrete quantum algorithm on a continuous optimization problem both open a new class of problems that can be solved by quantum annealers and suggest that near-term quantum machine learning is approaching classical benchmark performance.
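The zooming idea can be sketched classically: each iteration asks a binary optimizer for the sign of a correction to continuous weights, then shrinks the search radius. A toy sketch, with exhaustive search standing in for the annealer and a hypothetical quadratic loss rather than the Higgs classification objective:

```python
import itertools
import numpy as np

def solve_ising(loss, w, sigma, n):
    """Stand-in for the annealer: exhaustively pick signs s in {-1,+1}^n
    minimizing loss(w + sigma * s)."""
    best_s, best_val = None, np.inf
    for s in itertools.product([-1.0, 1.0], repeat=n):
        val = loss(w + sigma * np.array(s))
        if val < best_val:
            best_s, best_val = np.array(s), val
    return best_s

# Toy continuous objective with known minimum w_star (hypothetical values).
w_star = np.array([0.3, -0.7, 0.1])
loss = lambda w: np.sum((w - w_star) ** 2)

w, sigma = np.zeros(3), 1.0
for _ in range(12):        # zoom: binary step, then halve the search radius
    s = solve_ising(loss, w, sigma, 3)
    w = w + sigma * s
    sigma *= 0.5
print(w)
```

Each zoom step leaves every weight within `sigma` of its optimum, so halving the radius converges geometrically; the discrete annealer thereby solves a continuous problem.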


    Charged particle tracking with quantum annealing-inspired optimization

    At the High Luminosity Large Hadron Collider (HL-LHC), traditional track reconstruction techniques that are critical for analysis are expected to face challenges due to scaling with track density. Quantum annealing has shown promise in its ability to solve combinatorial optimization problems amidst an ongoing effort to establish evidence of a quantum speedup. As a step towards exploiting such potential speedup, we investigate a track reconstruction approach by adapting the existing geometric Denby-Peterson (Hopfield) network method to the quantum annealing framework and to HL-LHC conditions. Furthermore, we develop additional techniques to embed the problem onto existing and near-term quantum annealing hardware. Results using simulated annealing and quantum annealing with the D-Wave 2X system on the TrackML dataset are presented, demonstrating the successful application of a quantum annealing-inspired algorithm to the track reconstruction challenge. We find that the combinatorial optimization approach can effectively reconstruct tracks, suggesting possible applications for fast hardware-specific implementations at the LHC while leaving open the possibility of a quantum speedup for tracking.
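A Denby-Peterson-style objective can be sketched as a QUBO over candidate track segments. A toy sketch, with hand-picked hit positions and brute force in place of the annealer; the hardware embedding and HL-LHC-scale techniques are not reproduced:

```python
import itertools
import numpy as np

# Toy detector hits on three layers; a track should pick aligned segments.
hits = {0: [(0.0, 0.0)], 1: [(1.0, 0.1), (1.0, 0.9)], 2: [(2.0, 0.2)]}
# Candidate segments connect hits (layer, index) on adjacent layers.
segments = [((0, 0), (1, 0)), ((0, 0), (1, 1)),
            ((1, 0), (2, 0)), ((1, 1), (2, 0))]

def direction(seg):
    (la, ia), (lb, ib) = seg
    v = np.array(hits[lb][ib]) - np.array(hits[la][ia])
    return v / np.linalg.norm(v)

n = len(segments)
Q = np.zeros((n, n))
for i, j in itertools.combinations(range(n), 2):
    si, sj = segments[i], segments[j]
    if si[1] == sj[0]:                        # consecutive segments:
        cos = direction(si) @ direction(sj)   # reward smooth continuation
        Q[i, j] = -cos ** 3
    elif si[0] == sj[0] or si[1] == sj[1]:    # shared endpoint, same side:
        Q[i, j] = 1.0                         # penalize track bifurcations
Q += Q.T

# Brute-force stand-in for (simulated/quantum) annealing over s in {0,1}^n.
best = min(itertools.product([0, 1], repeat=n),
           key=lambda s: np.array(s) @ Q @ np.array(s))
print(best)
```

The minimizer activates the straight pair of segments through the aligned hits, illustrating how track finding becomes energy minimization suitable for an annealer.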

    Comment on "Comment on "Traversable wormhole dynamics on a quantum processor" "

    We observe that the comment of [1, arXiv:2302.07897] is consistent with [2] on key points: i) the microscopic mechanism of the experimentally observed teleportation is size winding and ii) the system thermalizes and scrambles at the time of teleportation. These properties are consistent with a gravitational interpretation of the teleportation dynamics, as opposed to the late-time dynamics. The objections of [1] concern counterfactual scenarios outside of the experimentally implemented protocol.
    Comment: 5 pages, 4 figures

    Graph Neural Networks for Particle Reconstruction in High Energy Physics detectors

    Pattern recognition problems in high energy physics are notably different from traditional machine learning applications in computer vision. Reconstruction algorithms identify and measure the kinematic properties of particles produced in high energy collisions and recorded with complex detector systems. Two critical applications are the reconstruction of charged particle trajectories in tracking detectors and the reconstruction of particle showers in calorimeters. These two problems have unique challenges and characteristics, but both have high dimensionality, a high degree of sparsity, and complex geometric layouts. Graph Neural Networks (GNNs) are a relatively new class of deep learning architectures which can deal with such data effectively, allowing scientists to incorporate domain knowledge in a graph structure and learn powerful representations leveraging that structure to identify patterns of interest. In this work we demonstrate the applicability of GNNs to these two diverse particle reconstruction problems.
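A single message-passing step of the kind used in such GNNs can be sketched in NumPy. This is a generic edge-network/node-network layer on a hypothetical hit graph with random weights; the actual architectures, losses, and detector datasets are specific to the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hit graph: 5 hits with 3 features each, 4 directed candidate edges.
x = rng.normal(size=(5, 3))                          # node (hit) features
edges = np.array([[0, 1], [1, 2], [1, 3], [3, 4]])   # (src, dst) pairs

W_edge = rng.normal(size=(6, 1))   # edge scorer on [h_src, h_dst]
W_node = rng.normal(size=(6, 3))   # node update on [h, aggregated message]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Edge network: score each candidate edge from its endpoint features
# (in tracking, this learns whether two hits belong to the same track).
h_pairs = np.concatenate([x[edges[:, 0]], x[edges[:, 1]]], axis=1)
edge_score = sigmoid(h_pairs @ W_edge)               # shape (4, 1)

# Node network: aggregate incoming messages weighted by edge scores,
# then update node features (the geometric structure enters via `edges`).
msg = np.zeros_like(x)
np.add.at(msg, edges[:, 1], edge_score * x[edges[:, 0]])
x_new = np.tanh(np.concatenate([x, msg], axis=1) @ W_node)
print(x_new.shape)
```

Stacking such layers and reading out the edge scores yields track-segment classifications; reading out node features supports calorimeter clustering, which is how one architecture family serves both reconstruction problems.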